Theoretical analyses of multiclass classification have shown that existing multiclass methods can train classifiers with high accuracy on the test set, provided that the training and test instances are drawn from the same distribution and sufficiently many instances can be collected for the training set. However, one limitation of multiclass classification has not yet been addressed: how to improve classification accuracy when only imprecise observations are available. Hence, in this paper we propose a novel framework for a new, realistic problem called multiclass classification with imprecise observations (MCIMO), in which a classifier must be trained from fuzzy-feature observations. First, we give a theoretical analysis of the MCIMO problem based on fuzzy Rademacher complexity. Then, two practical algorithms, based on support vector machines and neural networks respectively, are constructed to solve the proposed problem. Experiments on both synthetic and real-world datasets verify the soundness of our theoretical analysis and the efficacy of the proposed algorithms.
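The abstract above does not spell out its SVM/NN algorithms, so as a minimal illustration of the setting (training from fuzzy-feature observations), here is a hedged sketch that represents each feature as a triangular fuzzy number, defuzzifies it by its centroid, and fits a simple nearest-centroid classifier. The triangular representation, the centroid defuzzification, and the classifier are all illustrative assumptions, not the paper's method.

```python
# Sketch: classification with imprecise (fuzzy) observations.
# Each feature is a triangular fuzzy number (lower, mode, upper); we
# defuzzify to its centroid and run a nearest-centroid classifier.
# This is an illustrative stand-in for the paper's SVM/NN algorithms.

def defuzzify(fuzzy_point):
    """Centroid of each triangular fuzzy number (lower, mode, upper)."""
    return [(a + m + b) / 3.0 for (a, m, b) in fuzzy_point]

class NearestCentroid:
    def fit(self, fuzzy_X, y):
        crisp = [defuzzify(x) for x in fuzzy_X]
        self.centroids = {}
        for label in set(y):
            rows = [p for p, t in zip(crisp, y) if t == label]
            dim = len(rows[0])
            self.centroids[label] = [sum(r[d] for r in rows) / len(rows)
                                     for d in range(dim)]
        return self

    def predict(self, fuzzy_x):
        p = defuzzify(fuzzy_x)
        return min(self.centroids,
                   key=lambda c: sum((a - b) ** 2
                                     for a, b in zip(p, self.centroids[c])))
```

Defuzzify-then-train is the simplest baseline for imprecise data; the paper's point is precisely that one can do better by keeping the fuzziness inside the learning algorithm and the Rademacher analysis.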
The use of augmented reality (AR) for navigation has proven beneficial in assisting physicians during surgical procedures. These applications typically require knowing the pose of the surgical tools and of the patient in order to provide visual information the surgeon can use during task execution. Existing medical-grade tracking systems use infrared cameras placed inside the operating room (OR) to detect retro-reflective markers attached to objects of interest and to compute their pose. Some commercially available AR head-mounted displays (HMDs) use similar cameras for self-localization, hand tracking, and estimating object depth. This work presents a framework that uses the built-in cameras of an AR HMD to accurately track retro-reflective markers, such as those used in surgical procedures, without integrating any additional components; the framework can also track multiple tools simultaneously. Our results show that marker detection and tracking achieve an accuracy of 0.09 ± 0.06 mm in lateral translation, 0.42 ± 0.32 mm in longitudinal translation, and 0.80 ± 0.39° for rotations around the vertical axis. Furthermore, to demonstrate the relevance of the proposed framework, we evaluate the system's performance in the context of a surgical procedure. The use case was designed to replicate the scenario of K-wire insertion in an orthopedic procedure. For the evaluation, two surgeons and one biomedical researcher were provided with visual navigation, each performing 21 insertions. The results of this use case show accuracy comparable to that reported for other AR-based navigation procedures.
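The abstract reports translation accuracy plus rotation about the vertical axis, which suggests a planar rigid fit between the known marker geometry and the detected marker points. As a hedged sketch (the paper's actual 6-DoF pipeline on the HMD is not shown here), the planar case can be solved in closed form with a 2D Kabsch-style alignment:

```python
import math

def estimate_planar_pose(model_pts, observed_pts):
    """Estimate (tx, ty, theta) aligning known marker points to observed
    points in the horizontal plane via a 2D Kabsch-style fit. Illustrative
    stand-in for full 6-DoF retro-reflective marker pose estimation."""
    n = len(model_pts)
    # centroids of both point sets
    mcx = sum(x for x, _ in model_pts) / n
    mcy = sum(y for _, y in model_pts) / n
    ocx = sum(x for x, _ in observed_pts) / n
    ocy = sum(y for _, y in observed_pts) / n
    # optimal rotation from centered correspondences
    num = den = 0.0
    for (mx, my), (ox, oy) in zip(model_pts, observed_pts):
        px, py = mx - mcx, my - mcy
        qx, qy = ox - ocx, oy - ocy
        num += px * qy - py * qx
        den += px * qx + py * qy
    theta = math.atan2(num, den)  # rotation about the vertical axis
    # translation maps the rotated model centroid onto the observed centroid
    tx = ocx - (mcx * math.cos(theta) - mcy * math.sin(theta))
    ty = ocy - (mcx * math.sin(theta) + mcy * math.cos(theta))
    return tx, ty, theta
```

With three or more non-collinear marker points this fit is overdetermined, which is what makes sub-millimeter accuracy figures like those above meaningful.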
Detecting human-object interactions (HOI) in images is an important step toward high-level visual understanding. Existing work typically focuses on improving human and object detection or interaction recognition. However, due to dataset limitations, these methods tend to overfit to the frequent interactions of detected objects while largely neglecting the rare ones, which we refer to as the object bias problem in this paper. In this work, we reveal the problem for the first time from two aspects: an unbalanced interaction distribution and biased model learning. To overcome the object bias problem, we propose a novel plug-and-play object-wise debiasing memory (ODM) method for rebalancing the interaction distribution under each detected object. Equipped with carefully designed read and write strategies, the proposed ODM allows rare interaction instances to be sampled more frequently for training, thereby alleviating the object bias induced by the unbalanced interaction distribution. We apply the method to three advanced baselines and conduct experiments on the HICO-DET and HOI-COCO datasets. To study the object bias problem quantitatively, we advocate a new protocol for evaluating model performance. As the experimental results demonstrate, our method brings consistent and significant improvements over the baselines, especially for the rare interactions under each object. Moreover, when evaluated under the conventional standard setting, our method achieves new state-of-the-art results on both benchmarks.
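The rebalancing idea behind ODM can be sketched without the paper's exact read/write strategies: cache training instances per object class, capped per interaction type, and sample reads uniformly over interaction types rather than over instances. The capacity-based write rule and uniform read below are simplified assumptions, not the published design.

```python
import random
from collections import defaultdict

class DebiasingMemory:
    """Sketch of an object-wise debiasing memory: per detected object class,
    cache instances keyed by interaction type, then sample reads uniformly
    over interaction types so rare interactions appear more often in
    training batches. Simplified stand-in for the paper's ODM strategies."""

    def __init__(self, per_interaction_capacity=8, seed=0):
        self.cap = per_interaction_capacity
        self.mem = defaultdict(lambda: defaultdict(list))  # obj -> interaction -> instances
        self.rng = random.Random(seed)

    def write(self, obj, interaction, instance):
        slot = self.mem[obj][interaction]
        if len(slot) < self.cap:
            slot.append(instance)
        else:
            # frequent interactions stop growing at capacity: reservoir-style
            # overwrite keeps rare interactions proportionally represented
            slot[self.rng.randrange(self.cap)] = instance

    def read(self, obj):
        interactions = list(self.mem[obj])
        chosen = self.rng.choice(interactions)  # uniform over interaction types
        return chosen, self.rng.choice(self.mem[obj][chosen])
```

Because reads are uniform over interaction types, an interaction seen once is drawn as often as one seen a hundred times, which is the rebalancing effect the abstract describes.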
Human-object interaction (HOI) detection has received considerable attention in the context of scene understanding. Despite growing progress on benchmarks, we observe that existing methods often perform unsatisfactorily on distant interactions, for two main reasons: 1) distant interactions are inherently harder to recognize than close ones, since a natural scene often involves multiple humans and objects in complex spatial relations, so the recognition of distant human-object interactions is heavily affected by complex visual context; and 2) the scarcity of distant interactions in benchmark datasets leads to under-fitting on these instances. To address these problems, this paper proposes a novel two-stage method for better handling distant interactions in HOI detection. An essential component of our method is a novel near-far attention module, which enables information propagation between humans and objects while explicitly taking spatial distance into account. In addition, we design a novel distance-aware loss function that makes the model focus more on distant, and hence rare, interactions. We conduct extensive experiments on two challenging datasets, HICO-DET and V-COCO. The results show that the proposed method surpasses existing approaches by a large margin, establishing new state-of-the-art performance.
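A distance-aware loss of the kind described above can be sketched as cross-entropy up-weighted by the normalized human-object distance, so distant (and typically rarer) pairs contribute more to the gradient. The linear weighting and the `gamma` parameter below are illustrative assumptions; the paper's exact formulation may differ.

```python
import math

def distance_aware_loss(probs, targets, distances, gamma=1.0):
    """Cross-entropy weighted by normalized human-object distance.
    probs:     per-pair predicted probability of the positive class (0, 1)
    targets:   per-pair ground-truth labels in {0, 1}
    distances: per-pair human-object distance, normalized to [0, 1]
    Farther pairs get weight 1 + gamma * d, so the model focuses more on
    distant interactions. Hedged sketch, not the paper's exact loss."""
    total = 0.0
    for p, t, d in zip(probs, targets, distances):
        weight = 1.0 + gamma * d
        total += -weight * math.log(p if t == 1 else 1.0 - p)
    return total / len(probs)
```

At `d = 0` this reduces to plain cross-entropy, so the weighting only reshapes emphasis rather than changing the objective for close interactions.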
Incorporating biasing words obtained as contextual knowledge is critical for many automatic speech recognition (ASR) applications. This paper proposes a graph neural network (GNN) encoding for the tree-constrained pointer generator (TCPGen) component in end-to-end contextual ASR. By encoding the biasing words in the prefix tree with a tree-based GNN, lookahead for future wordpieces during end-to-end ASR decoding is achieved at each tree node by incorporating information about all wordpieces on the tree branches rooted at it, allowing a more accurate prediction of the generation probability of the biasing words. Systems were evaluated on the LibriSpeech corpus with simulated biasing tasks, and on the AMI corpus via a novel visually grounded contextual ASR pipeline that extracts biasing words from the slides accompanying each meeting. Results showed that TCPGen with GNN encodings achieved a further relative word error rate reduction of about 15% on biasing words compared with the original TCPGen, with a negligible increase in decoding computation cost.
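The lookahead mechanism can be sketched as a bottom-up pass over the biasing-word prefix tree, where each node's encoding mixes its own feature with its children's encodings. Scalar features and the fixed 0.5/0.5 mixing keep the sketch dependency-free; the real TCPGen uses learned vector embeddings and a trained tree GNN.

```python
class TrieNode:
    def __init__(self, feature):
        self.feature = feature   # scalar stand-in for a wordpiece embedding
        self.children = {}       # next wordpiece -> TrieNode

def encode_lookahead(node):
    """Bottom-up GNN-style pass over a biasing-word prefix tree: each node's
    encoding combines its own feature with its children's encodings, so
    scoring a node at decode time already 'looks ahead' at all wordpieces
    on the branches rooted at it. Illustrative sketch of the idea only."""
    child_encodings = [encode_lookahead(c) for c in node.children.values()]
    if child_encodings:
        node.encoding = (0.5 * node.feature
                         + 0.5 * sum(child_encodings) / len(child_encodings))
    else:
        node.encoding = node.feature
    return node.encoding
```

Because the aggregation runs once over the tree before decoding, the per-step decoding cost barely changes, consistent with the negligible overhead reported above.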
Prosodic boundaries play an important role in text-to-speech synthesis (TTS) in terms of naturalness and readability. However, obtaining prosodic boundary labels relies on manual annotation, which is costly and time-consuming. In this paper, we propose to automatically extract prosodic boundary labels from text-audio data via a neural text-speech model with pre-trained audio encoders. The model is pre-trained on text and speech data separately and then fine-tuned on TTS data in a triplet format: {speech, text, prosody}. Experimental results from both automatic and human evaluation show that: 1) the proposed text-speech prosody annotation framework significantly outperforms text-only baselines; 2) the quality of the automatic prosodic boundary annotations is comparable to that of human annotation; and 3) TTS systems trained with model-annotated boundaries outperform systems using manual ones.
Knowledge graphs (KG) have served as the key component of various natural language processing applications. Commonsense knowledge graphs (CKG) are a special type of KG, where entities and relations are composed of free-form text. However, previous works in KG completion and CKG completion suffer from long-tail relations and newly-added relations which do not have many known triples for training. In light of this, few-shot KG completion (FKGC), which draws on the strengths of graph representation learning and few-shot learning, has been proposed to address the problem of limited annotated data. In this paper, we comprehensively survey previous attempts on such tasks in the form of a series of methods and applications. Specifically, we first introduce FKGC challenges, commonly used KGs, and CKGs. Then we systematically categorize and summarize existing works in terms of the type of KGs and the methods. Finally, we present applications of FKGC models on prediction tasks in different areas and share our thoughts on future research directions of FKGC.
Unsupervised domain adaptation (UDA) for semantic segmentation is a promising task freeing people from heavy annotation work. However, domain discrepancies in low-level image statistics and high-level contexts compromise the segmentation performance over the target domain. A key idea to tackle this problem is to perform both image-level and feature-level adaptation jointly. Unfortunately, there is a lack of such unified approaches for UDA tasks in the existing literature. This paper proposes a novel UDA pipeline for semantic segmentation that unifies image-level and feature-level adaptation. Concretely, for image-level domain shifts, we propose a global photometric alignment module and a global texture alignment module that align images in the source and target domains in terms of image-level properties. For feature-level domain shifts, we perform global manifold alignment by projecting pixel features from both domains onto the feature manifold of the source domain; and we further regularize category centers in the source domain through a category-oriented triplet loss and perform target domain consistency regularization over augmented target domain images. Experimental results demonstrate that our pipeline significantly outperforms previous methods. In the commonly tested GTA5$\rightarrow$Cityscapes task, our proposed method using Deeplab V3+ as the backbone surpasses previous SOTA by 8%, achieving 58.2% in mIoU.
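The category-oriented triplet loss mentioned above can be sketched per pixel feature: pull the feature toward its own category center and push it away from another category's center by at least a margin. The squared-Euclidean distance and single-negative form below are illustrative assumptions, not the paper's exact formulation or mining strategy.

```python
def category_triplet_loss(anchor, positive_center, negative_center, margin=1.0):
    """Triplet loss over category centers: zero once the anchor is closer to
    its own category center than to the other center by at least `margin`.
    Vectors are plain Python lists; a real pipeline would batch this over
    pixel features with learned, running category centers."""
    d_pos = sum((a - p) ** 2 for a, p in zip(anchor, positive_center))
    d_neg = sum((a - n) ** 2 for a, n in zip(anchor, negative_center))
    return max(0.0, d_pos - d_neg + margin)
```

Regularizing source-domain category centers this way tightens per-class clusters, which in turn makes the projection of target-domain pixel features onto the source feature manifold more reliable.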
Given the increasingly intricate forms of partial differential equations (PDEs) in physics and related fields, computationally solving PDEs without analytic solutions inevitably suffers from the trade-off between accuracy and efficiency. Recent advances in neural operators, a kind of mesh-independent neural-network-based PDE solvers, have suggested the dawn of overcoming this challenge. In this emerging direction, Koopman neural operator (KNO) is a representative demonstration and outperforms other state-of-the-art alternatives in terms of accuracy and efficiency. Here we present KoopmanLab, a self-contained and user-friendly PyTorch module of the Koopman neural operator family for solving partial differential equations. Beyond the original version of KNO, we develop multiple new variants of KNO based on different neural network architectures to improve the general applicability of our module. These variants are validated by mesh-independent and long-term prediction experiments implemented on representative PDEs (e.g., the Navier-Stokes equation and the Bateman-Burgers equation) and ERA5 (i.e., one of the largest high-resolution data sets of global-scale climate fields). These demonstrations suggest the potential of KoopmanLab to be considered in diverse applications of partial differential equations.
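The core Koopman idea behind KNO is that a nonlinear dynamical system can be lifted into a space where its evolution is (approximately) linear, so time-stepping reduces to repeated application of a single matrix. Here is a minimal sketch of that structure; in KNO the encoder, decoder, and Koopman matrix are all learned, whereas below they are hand-picked toy linear maps.

```python
def matvec(M, v):
    """Multiply a matrix (list of rows) by a vector (list)."""
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def koopman_rollout(encode, K, decode, state, steps):
    """Koopman-operator sketch: lift the state with `encode`, evolve it
    *linearly* with the finite Koopman matrix K for `steps` steps, then map
    back with `decode`. Long-horizon prediction is just repeated matvecs."""
    z = encode(state)
    for _ in range(steps):
        z = matvec(K, z)
    return decode(z)
```

Because the latent dynamics are a fixed linear map, an n-step forecast costs n matrix-vector products and accumulates no per-step nonlinearity, which is one intuition for the long-term prediction stability reported for KNO.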
Different people speak with diverse personalized speaking styles. Although existing one-shot talking head methods have made significant progress in lip sync, natural facial expressions, and stable head motions, they still cannot generate diverse speaking styles in the final talking head videos. To tackle this problem, we propose a one-shot style-controllable talking face generation framework. In a nutshell, we aim to attain a speaking style from an arbitrary reference speaking video and then drive the one-shot portrait to speak with the reference speaking style and another piece of audio. Specifically, we first develop a style encoder to extract dynamic facial motion patterns of a style reference video and then encode them into a style code. Afterward, we introduce a style-controllable decoder to synthesize stylized facial animations from the speech content and style code. In order to integrate the reference speaking style into generated videos, we design a style-aware adaptive transformer, which enables the encoded style code to adjust the weights of the feed-forward layers accordingly. Thanks to the style-aware adaptation mechanism, the reference speaking style can be better embedded into synthesized videos during decoding. Extensive experiments demonstrate that our method is capable of generating talking head videos with diverse speaking styles from only one portrait image and an audio clip while achieving authentic visual effects. Project Page: https://github.com/FuxiVirtualHuman/styletalk.
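The style-aware adaptation described above (a style code adjusting the weights of the feed-forward layers) can be sketched as a feed-forward block whose hidden activations are modulated by per-channel scale and shift derived from the style code. The plain-list linear algebra, the scale/shift modulation rule, and the shapes are all illustrative assumptions, not the paper's exact parameterization.

```python
def style_adaptive_ffn(x, W1, W2, style_scale, style_shift):
    """Sketch of a style-aware adaptive feed-forward layer: per-channel
    scale/shift (assumed to be predicted from the style code) modulate the
    hidden activations, so one set of FFN weights can render different
    speaking styles. ReLU nonlinearity; plain Python lists throughout."""
    hidden = [sum(w * xi for w, xi in zip(row, x)) for row in W1]
    hidden = [max(0.0, s * h + b)                       # style modulation + ReLU
              for h, s, b in zip(hidden, style_scale, style_shift)]
    return [sum(w * hi for w, hi in zip(row, hidden)) for row in W2]
```

With `style_scale = [1, ...]` and `style_shift = [0, ...]` the layer reduces to a plain FFN, so the style pathway only perturbs an otherwise shared computation.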